
    Silver Standard Masks for Data Augmentation Applied to Deep-Learning-Based Skull-Stripping

    The bottleneck for convolutional neural networks (CNNs) in medical imaging is the amount of annotated data required for training. Manual segmentation is considered to be the "gold standard". However, medical imaging datasets with expert manual segmentation are scarce, as this step is time-consuming and expensive. In this work, we propose the use of what we refer to as silver standard masks for data augmentation in deep-learning-based skull-stripping, also known as brain extraction. We generated the silver standard masks using the consensus algorithm Simultaneous Truth and Performance Level Estimation (STAPLE). We evaluated CNN models trained with the silver and with the gold standard masks. We then validated the silver standard masks for CNN training on one dataset and showed their generalization to two other datasets. Our results indicate that models trained with silver standard masks are comparable to models trained with gold standard masks and generalize better. Moreover, our results also indicate that silver standard masks could be used to augment the input dataset at the training stage, reducing the need for manual segmentation at this step.
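
    The abstract does not include code, but the consensus step it describes can be illustrated. Below is a minimal, self-contained sketch of a binary STAPLE-style expectation-maximization in NumPy that fuses several automatic masks into a probabilistic "silver standard" mask; the function name, the fixed global prior, and the toy data are assumptions for illustration, not the authors' implementation.

        # Minimal, illustrative binary STAPLE-style EM in NumPy. A simplified
        # sketch of the consensus idea described in the abstract, not the
        # authors' code; the fixed global prior is an assumption.
        import numpy as np

        def staple_binary(masks, prior=None, n_iter=50, eps=1e-7):
            """Estimate a consensus probability map from binary rater masks.

            masks : array of shape (n_raters, n_voxels), values in {0, 1}
            Returns the per-voxel probability that the true label is foreground,
            plus the estimated sensitivity and specificity of each rater.
            """
            D = np.asarray(masks, dtype=np.float64)          # rater decisions
            n_raters, n_voxels = D.shape
            if prior is None:
                prior = D.mean()                              # global foreground prior
            p = np.full(n_raters, 0.9)                        # initial sensitivities
            q = np.full(n_raters, 0.9)                        # initial specificities

            for _ in range(n_iter):
                # E-step: posterior probability of foreground at each voxel
                a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
                b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
                W = a / (a + b + eps)
                # M-step: re-estimate each rater's sensitivity and specificity
                p = (D @ W) / (W.sum() + eps)
                q = ((1 - D) @ (1 - W)) / ((1 - W).sum() + eps)
            return W, p, q

        # Example: consensus ("silver standard") mask from three automatic masks.
        rng = np.random.default_rng(0)
        truth = (rng.random(1000) > 0.7).astype(int)
        raters = np.stack([np.where(rng.random(1000) < 0.9, truth, 1 - truth) for _ in range(3)])
        W, sens, spec = staple_binary(raters)
        silver_mask = (W >= 0.5).astype(np.uint8)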

    iamxt: Max-tree toolbox for image processing and analysis

    The iamxt is an array-based max-tree toolbox implemented in Python using the NumPy library for array processing. It provides state-of-the-art methods for building and processing the max-tree, and a large set of visualization tools for viewing the tree and the contents of its nodes. The array-based programming style and max-tree representation used in the toolbox make it simple to use. The intended audience of this toolbox includes mathematical morphology students and researchers who want to do research in the field, as well as image processing researchers who need a toolbox that is simple to use and easy to integrate into their applications.
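
    The iamxt API itself is not shown in the abstract, so the sketch below illustrates the max-tree idea with scikit-image, which also provides a max-tree builder and max-tree-based attribute filters; treat it as a conceptual stand-in rather than iamxt usage.

        # Illustrative sketch of the max-tree concept using scikit-image rather
        # than iamxt itself (the iamxt API is not shown in the abstract).
        import numpy as np
        from skimage.morphology import max_tree, area_opening

        # Small grayscale image with two bright blobs of different sizes.
        image = np.zeros((8, 8), dtype=np.uint8)
        image[1:3, 1:3] = 100          # 2x2 blob
        image[4:8, 4:8] = 200          # 4x4 blob

        # Build the max-tree: 'parent' links each pixel to its parent node and
        # 'traverser' lists pixels in an order suitable for tree processing.
        parent, traverser = max_tree(image, connectivity=1)

        # A typical max-tree application: remove bright components smaller than
        # 10 pixels (an area opening), which keeps only the larger blob.
        filtered = area_opening(image, area_threshold=10, connectivity=1)
        print(np.unique(filtered))     # the small blob's gray level is removed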

    Etiology-based classification of brain white matter hyperintensity on magnetic resonance imaging

    Brain white matter lesions found on magnetic resonance imaging are often observed in psychiatric or neurological patients. Individuals with these lesions present more significant cognitive impairment than individuals without them. We propose a computerized method to distinguish tissue containing white matter lesions of different etiologies (e.g., demyelinating or ischemic) using texture-based classifiers. Texture attributes were extracted from manually selected regions of interest and used to train and test supervised classifiers. Experiments were conducted to evaluate texture attribute discrimination and classifier performance. The most discriminating texture attributes were obtained from the gray-level histogram and from the co-occurrence matrix. The best classifier was the support vector machine, which achieved an accuracy of 87.9% in distinguishing lesions with different etiologies and an accuracy of 99.29% in distinguishing normal white matter from white matter lesions. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE).
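
    As a rough illustration of the texture-based pipeline described above (gray-level histogram and co-occurrence features feeding a support vector machine), here is a sketch using scikit-image and scikit-learn; the feature set, kernel, and toy ROIs are assumptions, not the published configuration.

        # Minimal sketch of texture-based ROI classification in the spirit of the
        # abstract: gray-level co-occurrence matrix (GLCM) features plus an SVM.
        # Illustrative only; requires scikit-image >= 0.19 (graycomatrix).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def glcm_features(roi):
            """Histogram and co-occurrence features for one grayscale ROI (uint8)."""
            glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            feats = [graycoprops(glcm, prop).mean()
                     for prop in ("contrast", "homogeneity", "energy", "correlation")]
            feats += [roi.mean(), roi.std()]          # simple gray-level statistics
            return np.array(feats)

        # Toy data: 'lesion' ROIs are noisier/brighter than 'normal' ROIs.
        rng = np.random.default_rng(0)
        rois = [rng.normal(120, 30, (32, 32)).clip(0, 255).astype(np.uint8) for _ in range(20)] + \
               [rng.normal(90, 10, (32, 32)).clip(0, 255).astype(np.uint8) for _ in range(20)]
        labels = np.array([1] * 20 + [0] * 20)

        X = np.stack([glcm_features(r) for r in rois])
        clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
        print("training accuracy:", clf.score(X, labels))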

    Multi-Coil MRI Reconstruction Challenge -- Assessing Brain MRI Reconstruction Models and their Generalizability to Varying Coil Configurations

    Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the MRI reconstruction quality of high-resolution brain images and to evaluate how these proposed algorithms will behave in the presence of small, but expected, data distribution shifts. The Multi-Coil Magnetic Resonance Image (MC-MRI) Reconstruction Challenge provides a benchmark that aims at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: 1) to compare different MRI reconstruction models on this dataset and 2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge's experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current MRI reconstruction state of the art and highlight the challenges of obtaining generalizable models, which are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
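
    For readers unfamiliar with multi-coil reconstruction, the sketch below shows a generic zero-filled baseline (inverse FFT of undersampled k-space followed by root-sum-of-squares coil combination); it is not one of the challenge's baseline or state-of-the-art models, and the sampling mask and dimensions are illustrative assumptions.

        # Generic multi-coil MRI reconstruction baseline: zero-filled inverse FFT
        # of undersampled k-space followed by a root-sum-of-squares (RSS) coil
        # combination. Illustration only, not a challenge submission.
        import numpy as np

        def zero_filled_rss(kspace, mask):
            """kspace: complex array (n_coils, ny, nx); mask: binary (ny, nx)."""
            undersampled = kspace * mask                          # keep only sampled lines
            coil_images = np.fft.fftshift(
                np.fft.ifft2(np.fft.ifftshift(undersampled, axes=(-2, -1)), axes=(-2, -1)),
                axes=(-2, -1))
            return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))  # RSS combination

        # Toy example: 12 coils, 256x256 matrix, keep every other phase-encode line.
        rng = np.random.default_rng(0)
        kspace = rng.normal(size=(12, 256, 256)) + 1j * rng.normal(size=(12, 256, 256))
        mask = np.zeros((256, 256))
        mask[::2, :] = 1                                          # 2x undersampling
        image = zero_filled_rss(kspace, mask)
        print(image.shape)                                        # (256, 256)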

    Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks

    Manual annotation is considered to be the "gold standard" in medical imaging analysis. However, medical imaging datasets that include expert manual segmentation are scarce, as this step is time-consuming and therefore expensive. Moreover, single-rater manual annotation is most often used in data-driven approaches, biasing the network toward that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus of a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with "silver standard" masks as input and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis using three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results showed that we outperformed (i.e., obtained larger Dice coefficients than) the current state-of-the-art skull-stripping methods without using gold standard annotation at the CNN training stage. CONSNet is the first deep learning approach that is fully trained using silver standard data and is, thus, more generalizable. Using these masks, we eliminated the cost of manual annotation, decreased inter-/intra-rater variability, and avoided CNN segmentation overfitting toward one specific manual annotation guideline, which can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume using a modern high-end GPU. In contrast, many of the other competitive methods have processing times on the order of minutes.
    This project was supported by FAPESP CEPID-BRAINN (2013/07559-3) and CAPES PVE (88881.062158/2014-01). Oeslle Lucena thanks FAPESP (2016/18332-8); Roberto Souza thanks the Natural Science and Engineering Research Council of Canada Collaborative Research and Training Experience International and Industrial Imaging Training (NSERC CREATE I3T) Program and the Hotchkiss Brain Institute; Letícia Rittner thanks CNPq (308311/2016-7); Richard Frayne is supported by NSERC (261754-2013), the Canadian Institutes of Health Research (CIHR, MOP-333931), and the Hopewell Professorship in Brain Imaging; and Roberto Lotufo thanks CNPq (311228/2014-3).
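
    Three of the five evaluation metrics listed above (Dice coefficient, sensitivity, and specificity) reduce to simple voxel counts on binary masks; the sketch below computes them with NumPy. It is an illustrative helper, not the authors' evaluation code, and it omits the two surface-distance metrics.

        # Overlap metrics for binary skull-stripping masks: Dice coefficient,
        # sensitivity, and specificity. Illustrative only; the Hausdorff and
        # symmetric surface-to-surface mean distances are not reproduced here.
        import numpy as np

        def overlap_metrics(pred, gt):
            """pred, gt: boolean arrays of the same shape (predicted and reference masks)."""
            pred, gt = pred.astype(bool), gt.astype(bool)
            tp = np.logical_and(pred, gt).sum()
            fp = np.logical_and(pred, ~gt).sum()
            fn = np.logical_and(~pred, gt).sum()
            tn = np.logical_and(~pred, ~gt).sum()
            dice = 2 * tp / (2 * tp + fp + fn)
            sensitivity = tp / (tp + fn)          # true positive rate
            specificity = tn / (tn + fp)          # true negative rate
            return dice, sensitivity, specificity

        # Toy example on a synthetic 3D volume.
        rng = np.random.default_rng(0)
        gt = rng.random((64, 64, 64)) > 0.5
        pred = np.where(rng.random(gt.shape) < 0.95, gt, ~gt)   # 95% agreement with gt
        print(overlap_metrics(pred, gt))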